Discrimination-aware Channel Pruning for Deep Neural Networks

Zhuangwei Zhuang, Mingkui Tan, Bohan Zhuang, Jing Liu, Yong Guo, Qingyao Wu, Junzhou Huang, Jinhui Zhu

Neural Information Processing Systems

Channel pruning is one of the predominant approaches for deep model compression. Existing pruning methods either train from scratch with sparsity constraints on channels, or minimize the reconstruction error between the pre-trained feature maps and the compressed ones. Both strategies suffer from some limitations: the former kind is computationally expensive and difficult to converge, whilst the latter kind optimizes the reconstruction error but ignores the discriminative power of channels.
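The reconstruction-error strategy described above can be sketched on a toy linear layer: choose a subset of input channels, refit the remaining weights by least squares, and measure how well the pruned layer reproduces the pre-trained outputs. The greedy NumPy sketch below illustrates that generic baseline, not the discrimination-aware method this paper proposes; all sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: N samples, C input channels feeding a linear (1x1-conv-like)
# layer with pre-trained weights W.  All dimensions are illustrative.
N, C, C_out = 200, 8, 4
X = rng.standard_normal((N, C))          # pre-trained input activations
W = rng.standard_normal((C, C_out))      # pre-trained layer weights
Y = X @ W                                # pre-trained output feature maps

def reconstruction_error(keep):
    """Least-squares refit of the layer using only the kept channels."""
    Xs = X[:, keep]
    Ws, *_ = np.linalg.lstsq(Xs, Y, rcond=None)
    return float(np.linalg.norm(Y - Xs @ Ws) ** 2)

# Greedy selection: repeatedly keep the channel that most reduces the
# reconstruction error, until the channel budget is met.
budget, keep = 4, []
while len(keep) < budget:
    best = min((c for c in range(C) if c not in keep),
               key=lambda c: reconstruction_error(keep + [c]))
    keep.append(best)

print(sorted(keep), reconstruction_error(keep))
```

The abstract's point is that this objective alone ignores how discriminative the kept channels are; a pruned layer can reconstruct features well yet still hurt classification.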







On Robustness of Principal Component Regression: Author Response

Neural Information Processing Systems

We begin by thanking all reviewers for their extremely encouraging and helpful responses. We agree that the fact that we do PCR on both the training and testing covariates should be placed more explicitly in the context of transductive semi-supervised learning. We have strived to interpret our major theorem results (Thm 4.2 & Thm 5.1) by: (i) providing examples of natural generating ... Proposition 4.2, should be tight). Their empirical results support our theoretical guarantees.
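For readers unfamiliar with the setup, the transductive aspect mentioned above can be sketched in a few lines: principal components are computed from the training and testing covariates jointly, and ordinary least squares is then run on the leading component scores. The data-generating model and all dimensions below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy low-rank covariates observed with noise -- the regime where PCR helps.
n_train, n_test, p, r = 100, 50, 20, 3
U = rng.standard_normal((n_train + n_test, r))
V = rng.standard_normal((r, p))
X = U @ V + 0.1 * rng.standard_normal((n_train + n_test, p))
beta = rng.standard_normal(p)
y_train = X[:n_train] @ beta + 0.1 * rng.standard_normal(n_train)

# Transductive PCR: PCA is fit on train AND test covariates together,
# then OLS is run on the leading r principal component scores.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:r].T                          # component scores for all rows
theta, *_ = np.linalg.lstsq(Z[:n_train], y_train, rcond=None)
y_pred_test = Z[n_train:] @ theta          # predictions for test covariates
```

Because the test covariates participate in the PCA step, this is not a purely inductive procedure, which is exactly why the semi-supervised framing matters.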


Anant-Net: Breaking the Curse of Dimensionality with Scalable and Interpretable Neural Surrogate for High-Dimensional PDEs

Menon, Sidharth S., Jagtap, Ameya D.

arXiv.org Artificial Intelligence

Physics-informed deep learning (PIDL) represents a rapidly advancing framework that integrates known governing physical laws, typically formulated as PDEs, into the training process of deep neural networks. In contrast to conventional data-driven models that rely solely on observational data, PIDL incorporates physical constraints to guide learning, thereby enhancing generalization, reducing data dependence, and improving interpretability. This synthesis of physics and deep learning has demonstrated broad applicability in solving forward and inverse problems across scientific and engineering domains, particularly in scenarios involving limited, noisy, or deceptive data. Key methodologies under the PIDL umbrella include physics-informed neural networks (PINNs) [1, 2, 3, 4], which embed PDE constraints via automatic differentiation; sparse identification of nonlinear dynamics (SINDy) [5, 6], which infers governing equations by promoting sparsity in learned representations; and physics-informed neural operators [7, 8, 9, 10, 11], which approximate solution operators across function spaces to model families of PDEs. These approaches are particularly well-suited for high-dimensional problems, where traditional numerical solvers suffer from the curse of dimensionality.

High-dimensional PDEs are integral to various scientific and engineering domains, including quantum mechanics, financial mathematics, and optimal control. Their solutions provide crucial insights into complex, multi-scale phenomena that cannot be accurately captured using lower-dimensional approximations. However, solving these equations efficiently remains a significant challenge due to the curse of dimensionality: the exponential growth in computational complexity and data requirements as the number of dimensions increases.
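The PINN idea of embedding the governing equation in the training loss can be illustrated in a deliberately linear toy setting. The sketch below solves u'(x) = -u(x) with u(0) = 1 by penalizing the equation residual at collocation points; real PINNs replace the polynomial ansatz with a neural network and compute derivatives via automatic differentiation, so this is only a schematic of the loss structure, not any of the cited methods.

```python
import numpy as np

# Collocation sketch of a PINN-style loss: the equation residual u' + u = 0
# and the boundary condition u(0) = 1 are both penalized.  With a polynomial
# ansatz u(x) = sum_k a_k x^k the residual is linear in the coefficients, so
# "training" reduces to one least-squares solve (exact solution: u = e^-x).
deg = 8
x = np.linspace(0.0, 1.0, 50)            # collocation points on [0, 1]

P = np.vander(x, deg + 1, increasing=True)   # P[i, k] = x_i**k, i.e. u(x_i)
dP = np.zeros_like(P)
for k in range(1, deg + 1):
    dP[:, k] = k * x ** (k - 1)              # u'(x_i)

# Stack the PDE residual rows with a strongly weighted boundary-condition row.
A = np.vstack([dP + P, 10.0 * P[:1]])
b = np.concatenate([np.zeros(len(x)), [10.0]])
a, *_ = np.linalg.lstsq(A, b, rcond=None)

u = P @ a
max_err = float(np.max(np.abs(u - np.exp(-x))))
print(f"max error vs exact e^-x: {max_err:.2e}")
```

The same residual-plus-boundary loss structure carries over to the nonlinear, high-dimensional case, where the least-squares solve is replaced by gradient-based training of a network.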